GPU Technology


Nvidia will dominate this crucial part of the AI market for at least the next two years

#artificialintelligence

The principal tasks of artificial intelligence (AI) are training and inferencing. The former is a data-intensive process to prepare AI models for production applications. Training an AI model ensures that it can perform its designated inferencing task--such as recognizing faces or understanding human speech--accurately and in an automated fashion. Inferencing is big business and is set to become the biggest driver of growth in AI. McKinsey has predicted that the opportunity for AI inferencing hardware in the data center will be twice that of AI training hardware by 2025 ($9 billion to $10 billion vs. $4 billion to $5 billion today).
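A minimal sketch of the two phases described above, using scikit-learn on synthetic data (the model, feature count, and data are illustrative assumptions, not drawn from the article): fitting is the data-intensive training step, while prediction is the fast, automated inferencing step that runs in production.

    # Training vs. inferencing, sketched with scikit-learn on toy data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Training: a data-intensive pass over labeled examples to fit model weights.
    X_train = np.random.rand(1000, 16)                 # 1,000 examples, 16 features (synthetic)
    y_train = (X_train.sum(axis=1) > 8).astype(int)    # synthetic labels
    model = LogisticRegression().fit(X_train, y_train)

    # Inferencing: applying the trained model to new inputs as they arrive.
    X_new = np.random.rand(5, 16)
    predictions = model.predict(X_new)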


NVIDIA Pushes Its GPU Technology To The Front And Center Of Artificial Intelligence

#artificialintelligence

At the GPU Technology Conference (GTC) in Santa Clara, NVIDIA has unveiled a series of new products that accelerate the research and development of artificial intelligence.


Inference Emerges As Next AI Challenge

#artificialintelligence

As developers flock to artificial intelligence frameworks in response to the explosion of intelligent machines, training deep learning models has emerged as a priority, along with syncing them to a growing list of neural and other network designs. All are being aligned to confront some of the next big AI challenges, including training deep learning models to make inferences from the fire hose of unstructured data. These and other AI developer challenges were highlighted during this week's Nvidia GPU Technology Conference in Washington. The GPU leader uses these events to bolster its contention that GPUs--some with up to 5,000 cores--are filling the computing gap created by the decline of Moore's Law. The other driving force behind the "era of AI" is the emergence of algorithm-driven deep learning, which is forcing developers to move beyond mere coding to apply AI to a growing range of automated processes and predictive analytics.


Visual Computing 2017: A Look At OpenCV - Go Parallel

#artificialintelligence

Ten years on from its first 1.0 release, OpenCV (short for Open Source Computer Vision) is a C++ library that lets you create software with real-time computer vision. Originally developed in Russia for Intel, it's fully open source and includes machine learning. It's BSD-licensed, so it can be used freely. This is a very large library, with over 200 MB in various DLLs, though you can also statically link to it if you prefer. It has several hundred algorithms written in C++ with C++, Python, Java and MATLAB interfaces and runs on Windows, Linux, Android, iOS and Mac OS.
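A minimal sketch of what the Python interface looks like in practice (the image path and threshold values are placeholder assumptions; the calls are standard OpenCV functions):

    # Load an image, convert it to grayscale, and run edge detection with OpenCV.
    import cv2

    img = cv2.imread("input.jpg")                  # "input.jpg" is a placeholder path
    if img is None:
        raise FileNotFoundError("could not read input.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # BGR image -> single-channel grayscale
    edges = cv2.Canny(gray, 100, 200)              # Canny edge detection with example thresholds
    cv2.imwrite("edges.jpg", edges)                # write the result back to disk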


Why 2017 is setting up to be the year of GPU chips in deep learning

#artificialintelligence

GPU technology has been around for decades, but only recently has it gained traction among enterprises. It was traditionally used to enhance computer graphics, as the name suggests. But as deep learning and artificial intelligence have grown in prominence, the need for fast, parallel computation to train models has increased. "A couple years ago, we wouldn't be looking at special hardware for this," said Adrian Bowles, founder of analyst firm STORM Insights Inc. in Boston. "But with [deep learning], you have a lot of parallel activities going on, and GPU-based tools are going to give you more cores."
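As a rough illustration of the parallelism Bowles describes, here is a minimal PyTorch sketch (the layer sizes and batch are arbitrary assumptions, not from the article): the same training step runs on the CPU or, with a one-line device change, across the many cores of a GPU.

    # One training step that runs on a GPU's cores when CUDA is available.
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
    optimizer = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(256, 1024, device=device)      # a synthetic batch of inputs
    y = torch.randint(0, 10, (256,), device=device)

    loss = loss_fn(model(x), y)                    # forward pass, parallelized across GPU cores
    loss.backward()                                # gradient computation, also on the GPU
    optimizer.step()                               # weight update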


Is GPU technology giving Spark a flame? #BigDataNYC

#artificialintelligence

Gearing up for three days of coverage of BigDataNYC 2016 at 37 Pillars in New York City, the SiliconANGLE Media team and NVIDIA Corp. hosted The Future: AI-Driven Analytics, An Evening of Deep Learning. The event kicked off the conversation about deriving benefits from Big Data through advanced Artificial Intelligence (AI) and Machine Learning (ML). An event panel met to talk about deep learning: what it means, where it's headed and its implications for next-gen apps. Panelists Jim McHugh, VP and GM of NVIDIA Corp.; Randy Swanberg, distinguished engineer at IBM; Ram Sriharsha, product manager, Apache Spark, at DataBricks, Inc.; and Josh Patterson, director of Field Engineering at Skymind, joined host George Gilbert (@ggilbert41), Big Data analyst at Wikibon and theCUBE cohost (from the SiliconANGLE Media team), to talk about deep learning and where it is going in the future. Gilbert began the panel discussion by saying that the real impending advance is the sheer number of cores available when GPUs (Graphics Processing Units) are used as auxiliary processing units, which he feels is going to change where computation happens in the future.


GPUs Reshape Computing

Communications of the ACM

Nvidia's Titan X graphics card, featuring the company's Pascal-powered graphics processing unit driven by 3,584 CUDA cores running at 1.5 GHz. As researchers continue to push the boundaries of neural networks and deep learning--particularly in speech recognition and natural language processing, image and pattern recognition, text and data analytics, and other complex areas--they are constantly on the lookout for new and better ways to extend and expand computing capabilities. For decades, the gold standard has been high-performance computing (HPC) clusters, which toss huge amounts of processing power at problems--albeit at a prohibitively high cost. This approach has helped fuel advances across a wide swath of fields, including weather forecasting, financial services, and energy exploration. However, in 2012, a new method emerged.